
    Image Restoration: A General Wavelet Frame Based Model and Its Asymptotic Analysis

    Image restoration is one of the most important areas in imaging science. Mathematical tools have been widely used in image restoration, where the wavelet frame based approach is one of the successful examples. In this paper, we introduce a generic wavelet frame based image restoration model, called the "general model", which includes most of the existing wavelet frame based models as special cases; it also includes examples that are new to the literature. Motivated by our earlier studies [1-3], we provide an asymptotic analysis of the general model as the image resolution goes to infinity, which establishes a connection between the general model in the discrete setting and a new variational model in the continuum setting. The variational model in turn includes some existing variational models as special cases, such as the total generalized variation model proposed in [4]. In the end, we introduce an algorithm that solves the general model and present one numerical simulation as an example.
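
    The abstract does not spell out the general model, but the basic building block of most wavelet frame based restoration schemes is soft-thresholding (shrinkage) of frame coefficients. The sketch below is a minimal special case, assuming a single-level undecimated Haar tight frame and a one-step shrinkage denoiser u = Wᵀ S_λ(W f); it is an illustration under these assumptions, not the paper's algorithm, and all function names are hypothetical.

```python
import numpy as np

def haar_frame(u):
    """One-level undecimated Haar tight frame: returns 4 subbands (LL, LH, HL, HH)."""
    lo = lambda x, ax: 0.5 * (x + np.roll(x, -1, axis=ax))
    hi = lambda x, ax: 0.5 * (x - np.roll(x, -1, axis=ax))
    return [g(f(u, 0), 1) for f in (lo, hi) for g in (lo, hi)]

def haar_frame_adjoint(bands):
    """Adjoint transform; because the frame is tight, W^T W = I."""
    loT = lambda x, ax: 0.5 * (x + np.roll(x, 1, axis=ax))
    hiT = lambda x, ax: 0.5 * (x - np.roll(x, 1, axis=ax))
    ops = [(f, g) for f in (loT, hiT) for g in (loT, hiT)]
    return sum(f(g(b, 1), 0) for (f, g), b in zip(ops, bands))

def frame_shrink_denoise(f, lam):
    """One frame soft-thresholding step u = W^T shrink(W f, lam); the lowpass band is left untouched."""
    bands = haar_frame(f)
    shrunk = [bands[0]] + [np.sign(b) * np.maximum(np.abs(b) - lam, 0.0) for b in bands[1:]]
    return haar_frame_adjoint(shrunk)

# toy usage: denoise a noisy ramp image
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
restored = frame_shrink_denoise(noisy, lam=0.05)
print(np.linalg.norm(noisy - clean), np.linalg.norm(restored - clean))
```

    For a general degradation operator, this shrinkage step would typically sit inside an iterative scheme (e.g., a proximal splitting method) rather than being applied once.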

    Coherence retrieval using trace regularization

    The mutual intensity and its equivalent phase-space representations quantify an optical field's state of coherence and are important tools in the study of light propagation and dynamics, but they can only be estimated indirectly from measurements through a process called coherence retrieval, otherwise known as phase-space tomography. As practical considerations often rule out the availability of a complete set of measurements, coherence retrieval is usually a challenging high-dimensional ill-posed inverse problem. In this paper, we propose a trace-regularized optimization model for coherence retrieval and a provably convergent adaptive accelerated proximal gradient algorithm for solving the resulting problem. Applying our model and algorithm to both simulated and experimental data, we demonstrate an improvement in reconstruction quality over previous models as well as an increase in convergence speed compared to existing first-order methods. (Comment: 28 pages, 10 figures; accepted for publication in SIAM Journal on Imaging Sciences.)
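
    The paper's adaptive accelerated proximal gradient method is not reproduced here. As a rough illustration of a trace-regularized model of the same general form, the sketch below assumes intensity measurements that are linear in the mutual intensity matrix, y_k = tr(A_k J) with J Hermitian positive semidefinite, and solves min_{J ⪰ 0} 0.5 Σ_k (tr(A_k J) − y_k)² + λ tr(J) with plain projected gradient descent; the measurement model, function names, and step-size rule are assumptions.

```python
import numpy as np

def project_psd(J):
    """Project a Hermitian matrix onto the positive semidefinite cone."""
    J = 0.5 * (J + J.conj().T)
    w, V = np.linalg.eigh(J)
    return (V * np.maximum(w, 0.0)) @ V.conj().T

def coherence_retrieval_pg(A, y, lam, n_iter=500, step=None):
    """Projected gradient for  min_{J >= 0}  0.5*sum_k (tr(A_k J) - y_k)^2 + lam*tr(J).

    A : (m, n, n) stack of Hermitian measurement matrices
    y : (m,) measured intensities
    """
    m, n, _ = A.shape
    if step is None:
        # crude Lipschitz bound for the quadratic data term: sum_k ||A_k||_F^2
        step = 1.0 / (np.sum(np.abs(A) ** 2) + 1e-12)
    J = np.zeros((n, n), dtype=complex)
    for _ in range(n_iter):
        r = np.einsum('kij,ji->k', A, J).real - y   # residuals tr(A_k J) - y_k
        grad = np.einsum('k,kij->ij', r, A)         # gradient of the data term on Hermitian matrices
        grad = grad + lam * np.eye(n)               # gradient of lam*tr(J)
        J = project_psd(J - step * grad)
    return J
```

    Because the trace term is linear, it simply adds λI to the gradient; an accelerated variant would add a momentum step before the projection.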

    Adaptive low rank and sparse decomposition of video using compressive sensing

    We address the problem of reconstructing and analyzing surveillance videos using compressive sensing. We develop a new method that performs video reconstruction through an adaptive low rank and sparse decomposition, so that background subtraction becomes part of the reconstruction itself. Our method maintains a background model in which the background is learned adaptively as the compressive measurements are processed. The adaptive method has low latency and is more robust than previous methods. We present experimental results that demonstrate the advantages of the proposed method. (Comment: Accepted ICIP 201)
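
    The method above works directly on compressive measurements and updates the background online; the sketch below is only a minimal, batch, non-compressive illustration of the low rank + sparse split it builds on (an RPCA-style alternating shrinkage, with frames stacked as columns so the static background is low rank and the moving foreground is sparse). All names and parameter values are hypothetical.

```python
import numpy as np

def svt(X, tau):
    """Soft-threshold the singular values of X (nuclear-norm prox)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft-thresholding (l1 prox)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def low_rank_sparse_split(M, tau_l=1.0, tau_s=0.05, n_iter=50):
    """Alternating shrinkage for M ≈ L + S with L low rank (background)
    and S sparse (foreground). Columns of M are vectorized frames."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, tau_l)
        S = soft(M - L, tau_s)
    return L, S

# toy usage: rank-1 static background plus one bright foreground pixel per frame
rng = np.random.default_rng(1)
bg = np.outer(rng.random(100), np.ones(30))          # 100 pixels x 30 frames
fg = np.zeros_like(bg)
fg[rng.integers(0, 100, 30), np.arange(30)] = 1.0
L, S = low_rank_sparse_split(bg + fg)
```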

    A Singular Value Thresholding Algorithm for Matrix Completion

    This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications, as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple, first-order, easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices {X^k, Y^k}, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this algorithm attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both of these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which 1,000 × 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for ℓ_1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
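
    A minimal dense-matrix sketch of the iteration described above: soft-threshold the singular values of Y^{k-1} to get X^k, then take a step on the observed entries, Y^k = Y^{k-1} + δ P_Ω(M − X^k). The actual implementation exploits the sparsity of Y^k and computes only a partial SVD, which this sketch does not; the function name is hypothetical, and the toy τ and δ follow heuristics reported in the SVT literature.

```python
import numpy as np

def svt_complete(M, mask, tau, delta, n_iter=500, tol=1e-4):
    """Singular Value Thresholding for matrix completion.

    M    : observed matrix (entries outside `mask` are ignored)
    mask : boolean array, True where an entry of M is observed
    tau  : threshold applied to the singular values
    delta: step size for the dual update
    """
    Y = np.zeros_like(M, dtype=float)
    for _ in range(n_iter):
        # shrinkage step: soft-threshold the singular values of Y
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # gradient step on the observed entries only
        residual = mask * (M - X)
        Y = Y + delta * residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(mask * M):
            break
    return X

# toy usage: recover a rank-2 200x200 matrix from ~30% of its entries
rng = np.random.default_rng(0)
M_true = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 200))
mask = rng.random((200, 200)) < 0.3
tau, delta = 5 * 200, 1.2 / 0.3   # heuristics of the form tau = 5n, delta = 1.2*n1*n2/m
M_hat = svt_complete(mask * M_true, mask, tau, delta)
print(np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true))
```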